
Task parallelism

Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks—concretely performed by processes or threads—across different parallel computing nodes. It contrasts with data parallelism, the other major form of parallelism.
==Description==
In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one another as they work. Communication usually takes place by passing data from one thread to the next as part of a workflow.
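The workflow-style communication described above can be sketched with two Python threads passing data through a queue; the stage names and the squaring workload here are illustrative assumptions, not part of the original text.

```python
import threading
import queue

# Hypothetical two-stage workflow: a producer thread generates items and a
# consumer thread processes them; the queue carries the data between stages.
def produce(out_q):
    for n in range(5):
        out_q.put(n)          # pass each item to the next stage
    out_q.put(None)           # sentinel: no more work

def consume(in_q, results):
    while True:
        item = in_q.get()
        if item is None:      # sentinel received, stop consuming
            break
        results.append(item * item)

q = queue.Queue()
results = []
producer = threading.Thread(target=produce, args=(q,))
consumer = threading.Thread(target=consume, args=(q, results))
producer.start()
consumer.start()
producer.join()
consumer.join()
print(results)  # [0, 1, 4, 9, 16]
```

Here the two threads run different code on different data, and the queue plays the role of the inter-thread communication channel mentioned above.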
As a simple example, suppose we are running code on a 2-processor system (CPUs "a" and "b") in a parallel environment and we wish to perform tasks "A" and "B". We can tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the run time of the execution. The tasks can be assigned using conditional statements.
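The conditional assignment just described can be sketched in Python, with threads standing in for CPUs "a" and "b" (the task bodies and the worker ids are illustrative assumptions): every worker runs the same program, and a conditional on its id selects which task it performs.

```python
import threading

def task_a():
    return "A done"   # stand-in workload for task "A"

def task_b():
    return "B done"   # stand-in workload for task "B"

results = {}

def program(worker_id):
    # The conditional assignment from the text: the same program runs on
    # every worker, and the id decides which task this worker executes.
    if worker_id == "a":
        results[worker_id] = task_a()
    elif worker_id == "b":
        results[worker_id] = task_b()

workers = [threading.Thread(target=program, args=(wid,)) for wid in ("a", "b")]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(results)  # {'a': 'A done', 'b': 'B done'} (key order may vary)
```

Both tasks run concurrently, so the total run time is bounded by the longer task rather than the sum of the two.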
Task parallelism emphasizes the distributed (parallelized) nature of the processing (i.e. threads), as opposed to the data (data parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.
Thread-level parallelism (TLP) is the parallelism inherent in an application that runs multiple threads at once. This type of parallelism is found largely in applications written for commercial servers such as databases. By running many threads at once, these applications are able to tolerate the high amounts of I/O and memory system latency their workloads can incur: while one thread is delayed waiting for a memory or disk access, other threads can do useful work.
The exploitation of thread-level parallelism has also begun to make inroads into the desktop market with the advent of multi-core microprocessors. This has occurred because, for various reasons, it has become increasingly impractical to increase either the clock speed or the instructions per clock of a single core. If this trend continues, new applications will have to be designed to use multiple threads in order to benefit from the increase in potential computing power. This contrasts with previous microprocessor innovations, in which existing code was automatically sped up by running it on a newer, faster computer.

Excerpt source: the free encyclopedia Wikipedia.
Read the full article on "Task parallelism" at Wikipedia.



Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.